Ensemble Learning
- Europe > Switzerland > Zürich > Zürich (0.14)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > New Jersey > Mercer County > Princeton (0.04)
- (4 more...)
- Law Enforcement & Public Safety > Fraud (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Tax (1.00)
- (4 more...)
- North America > United States > Maryland (0.04)
- North America > United States > California (0.04)
- North America > Canada (0.04)
- (2 more...)
- Information Technology > Artificial Intelligence > Machine Learning > Decision Tree Learning (0.53)
- Information Technology > Artificial Intelligence > Machine Learning > Ensemble Learning (0.43)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Therapeutic Area > Neurology (0.68)
- Europe > United Kingdom > England > Nottinghamshire > Nottingham (0.14)
- North America > United States > Wisconsin (0.04)
- North America > United States > Florida > Broward County (0.04)
- (3 more...)
- Information Technology > Data Science > Data Mining (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Decision Tree Learning (0.30)
- Information Technology > Artificial Intelligence > Machine Learning > Ensemble Learning (0.30)
- Information Technology > Data Science (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.69)
- Information Technology > Artificial Intelligence > Machine Learning > Ensemble Learning (0.48)
Statistical Inference for Explainable Boosting Machines
Fang, Haimo, Tan, Kevin, Pipping, Jonathan, Hooker, Giles
Explainable boosting machines (EBMs) are popular "glass-box" models that learn a set of univariate functions using boosted trees. EBMs achieve explainability through visualizations of each feature's effect. However, unlike linear model coefficients, uncertainty quantification for the learned univariate functions requires computationally intensive bootstrapping, making it hard to know which features truly matter. We provide an alternative using recent advances in statistical inference for gradient boosting, deriving methods for statistical inference along with end-to-end theoretical guarantees. Using a moving average instead of a sum of trees (Boulevard regularization) allows the boosting process to converge to a feature-wise kernel ridge regression. This yields asymptotically normal predictions that achieve the minimax-optimal mean squared error rate $O(pn^{-2/3})$ for fitting Lipschitz GAMs with $p$ features, avoiding the curse of dimensionality. We then construct prediction intervals for the response and confidence intervals for each learned univariate function with a runtime independent of the number of datapoints, enabling further explainability within EBMs.
- North America > United States > Pennsylvania (0.04)
- Oceania > New Zealand (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Africa > Middle East > Morocco > Tanger-Tetouan-Al Hoceima Region > Tangier (0.04)
- Information Technology > Artificial Intelligence > Machine Learning > Ensemble Learning (0.88)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.67)
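The Boulevard regularization described in the abstract replaces the usual sum of trees with a moving average: each new tree is blended into the ensemble with weight 1/b rather than added. A minimal sketch of that update rule in Python (the shrinkage λ, tree depth, and synthetic data are illustrative assumptions, not the authors' settings):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.1, 500)

lam = 0.8        # shrinkage factor (assumed value)
n_rounds = 200
pred = np.zeros(len(y))
trees = []

for b in range(1, n_rounds + 1):
    tree = DecisionTreeRegressor(max_depth=2, random_state=b)
    tree.fit(X, y - pred)            # fit the current residuals
    trees.append(tree)
    # Boulevard update: the ensemble is a shrunken *average* of trees,
    # not a sum -- the new tree enters with weight 1/b.
    pred = ((b - 1) / b) * pred + (lam / b) * tree.predict(X)

mse = np.mean((y - pred) ** 2)
```

Because of the 1/b weights, the process behaves like an averaged fixed-point iteration, which is what drives the convergence to a kernel ridge regression limit.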
Robust X-Learner: Breaking the Curse of Imbalance and Heavy Tails via Robust Cross-Imputation
Estimating Heterogeneous Treatment Effects (HTE) in industrial applications such as AdTech and healthcare presents a dual challenge: extreme class imbalance and heavy-tailed outcome distributions. While the X-Learner framework effectively addresses imbalance through cross-imputation, we demonstrate that it is fundamentally vulnerable to "Outlier Smearing" when reliant on Mean Squared Error (MSE) minimization. In this failure mode, the bias from a few extreme observations ("whales") in the minority group is propagated to the entire majority group during the imputation step, corrupting the estimated treatment effect structure. To resolve this, we propose the Robust X-Learner (RX-Learner). This framework integrates a redescending γ-divergence objective -- structurally equivalent to the Welsch loss under Gaussian assumptions -- into the gradient boosting machinery. We further stabilize the non-convex optimization using a Proxy Hessian strategy grounded in Majorization-Minimization (MM) principles. Empirical evaluation on a semi-synthetic Criteo Uplift dataset demonstrates that the RX-Learner reduces the Precision in Estimation of Heterogeneous Effect (PEHE) metric by 98.6% compared to the standard X-Learner, effectively decoupling the stable "Core" population from the volatile "Periphery".
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Ensemble Learning (0.67)
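The redescending objective is the mechanism that blocks "Outlier Smearing": under the Welsch loss ρ(r) = (c²/2)(1 − exp(−r²/c²)), the gradient of an extreme residual decays back toward zero, so a "whale" contributes almost nothing to the imputation fit. A sketch of the gradient and an MM-style proxy Hessian in the usual gradient-boosting custom-objective convention (the scale c and the residual convention r = prediction − label are assumptions, not the authors' exact implementation):

```python
import numpy as np

def welsch_objective(preds, labels, c=1.0):
    """Gradient and proxy Hessian for the Welsch loss
        rho(r) = (c**2 / 2) * (1 - exp(-r**2 / c**2)),  r = preds - labels.
    The exact Hessian, (1 - 2*r**2/c**2) * exp(-r**2/c**2), turns negative
    for |r| > c/sqrt(2); in a Majorization-Minimization spirit we return
    the positive weight w = exp(-r**2/c**2) instead, which majorizes the
    loss by a quadratic and keeps the Newton-style step stable.
    """
    r = preds - labels
    w = np.exp(-(r ** 2) / c ** 2)
    grad = r * w    # rho'(r): redescends to 0 as |r| grows
    hess = w        # proxy Hessian, always in (0, 1]
    return grad, hess

# A "whale" residual of 10 contributes essentially zero gradient,
# whereas the squared-error gradient would be 10.
r = np.array([0.1, 1.0, 10.0])
grad, hess = welsch_objective(r, np.zeros(3), c=1.0)
```

A function with this (grad, hess) signature could be dropped into a boosting library's custom-objective hook; the proxy Hessian is what stabilizes the non-convex optimization the abstract refers to.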
What Functions Does XGBoost Learn?
Ki, Dohyeong, Guntuboyina, Adityanand
This paper establishes a rigorous theoretical foundation for the function class implicitly learned by XGBoost, bridging the gap between its empirical success and our theoretical understanding. We introduce an infinite-dimensional function class $\mathcal{F}^{d, s}_{\infty-\text{ST}}$ that extends finite ensembles of bounded-depth regression trees, together with a complexity measure $V^{d, s}_{\infty-\text{XGB}}(\cdot)$ that generalizes the $L^1$ regularization penalty used in XGBoost. We show that every optimizer of the XGBoost objective is also an optimizer of an equivalent penalized regression problem over $\mathcal{F}^{d, s}_{\infty-\text{ST}}$ with penalty $V^{d, s}_{\infty-\text{XGB}}(\cdot)$, providing an interpretation of XGBoost as implicitly targeting a broader function class. We also develop a smoothness-based interpretation of $\mathcal{F}^{d, s}_{\infty-\text{ST}}$ and $V^{d, s}_{\infty-\text{XGB}}(\cdot)$ in terms of Hardy--Krause variation. We prove that the least squares estimator over $\{f \in \mathcal{F}^{d, s}_{\infty-\text{ST}}: V^{d, s}_{\infty-\text{XGB}}(f) \le V\}$ achieves a nearly minimax-optimal rate of convergence $n^{-2/3} (\log n)^{4(\min(s, d) - 1)/3}$, thereby avoiding the curse of dimensionality. Our results provide the first rigorous characterization of the function space underlying XGBoost, clarify its connection to classical notions of variation, and identify an important open problem: whether the XGBoost algorithm itself achieves minimax optimality over this class.
- North America > United States > New York > New York County > New York City (0.04)
- Asia > Middle East > Jordan (0.04)
- North America > United States > New Jersey > Bergen County > Hackensack (0.04)
- (4 more...)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Ensemble Learning (1.00)
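In the notation of the abstract, the claimed equivalence can be restated schematically as a penalized least-squares problem (the tuning parameter $\lambda$ and the sample form are illustrative; see the paper for the precise statement):

```latex
\hat f \;\in\; \operatorname*{arg\,min}_{f \in \mathcal{F}^{d,s}_{\infty\text{-}\mathrm{ST}}}
\;\frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - f(x_i)\bigr)^2
\;+\; \lambda\, V^{d,s}_{\infty\text{-}\mathrm{XGB}}(f)
```

The minimax result then concerns the constrained estimator over $\{f : V^{d,s}_{\infty\text{-}\mathrm{XGB}}(f) \le V\}$, which attains the rate $n^{-2/3}(\log n)^{4(\min(s,d)-1)/3}$ quoted above.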
Uncertainty-Adjusted Sorting for Asset Pricing with Machine Learning
Liu, Yan, Luo, Ye, Wang, Zigan, Zhang, Xiaowei
A large and rapidly expanding literature demonstrates that machine learning (ML) methods substantially improve out-of-sample asset return prediction relative to conventional linear benchmarks, and that these statistical gains often translate into economically meaningful portfolio performance. Seminal contributions such as Gu et al. (2020) document large Sharpe ratio improvements from nonlinear learners in U.S. equities, while subsequent work extends these findings to stochastic discount factor estimation (Chen et al. 2024), international equity markets (Leippold et al. 2022), and bond return forecasting (Kelly et al. 2019, Bianchi et al. 2020). Collectively, this literature establishes ML as a powerful tool for extracting conditional expected returns in environments characterized by noisy signals, nonlinear interactions, and pervasive multicollinearity.
- North America > United States (0.14)
- Asia > China > Hong Kong (0.04)
- Asia > China > Guangdong Province > Shenzhen (0.04)
- (2 more...)
- Banking & Finance > Trading (1.00)
- Banking & Finance > Economy (0.92)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Ensemble Learning (0.69)